14 research outputs found

    Arrhythmia classification based on convolution neural network feature extraction and fusion

    This study proposes a new automatic arrhythmia classification method to assist doctors in diagnosing and treating arrhythmias. A convolutional neural network is constructed to extract features from the ECG signal and from wavelet components of the QRS complex. The ECG features and wavelet features extracted by the network, together with manually extracted RR-interval features, are fed to a fully connected layer for fusion, and a softmax function classifies the beats in the output layer. The network is trained and tested on the MLII-lead data of the MIT-BIH arrhythmia database. The overall classification accuracy of the method is 98.12%, the average sensitivity is 87.32%, and the average positive predictive value is 90.37%. The method can quickly identify different types of arrhythmias and has reference value for computer-aided diagnosis of arrhythmias.
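    The fusion step described above — concatenating network-extracted features with hand-crafted RR-interval features and classifying through a softmax output layer — can be sketched as follows. This is a minimal numpy illustration, not the paper's network; all feature dimensions and the 5-class output are assumptions.

    ```python
    import numpy as np

    def softmax(z):
        # Numerically stable softmax over the last axis.
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def fuse_and_classify(ecg_feat, wavelet_feat, rr_feat, W, b):
        """Concatenate the three feature vectors and apply one dense
        softmax layer, mirroring the fusion step described above."""
        fused = np.concatenate([ecg_feat, wavelet_feat, rr_feat], axis=-1)
        return softmax(fused @ W + b)

    rng = np.random.default_rng(0)
    ecg_feat = rng.standard_normal(32)      # CNN features from the raw beat (hypothetical size)
    wavelet_feat = rng.standard_normal(16)  # CNN features from QRS wavelet components
    rr_feat = rng.standard_normal(4)        # hand-crafted RR-interval features
    W = rng.standard_normal((52, 5)) * 0.1  # 52 = 32+16+4 fused dims -> 5 beat classes (assumed)
    b = np.zeros(5)
    probs = fuse_and_classify(ecg_feat, wavelet_feat, rr_feat, W, b)
    ```

    In the actual method the dense layer's weights are learned jointly with the convolutional feature extractors; here they are random for illustration only.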

    Regional Differential Information Entropy for Super-Resolution Image Quality Assessment

    PSNR and SSIM are the most widely used metrics in super-resolution because they are easy to compute and can evaluate the similarity between generated and reference images. However, single-image super-resolution is an ill-posed problem: multiple high-resolution images correspond to the same low-resolution image, so similarity alone cannot fully reflect restoration quality. The perceptual quality of generated images is also important, but PSNR and SSIM do not reflect perceptual quality well. To address this, we propose a method called regional differential information entropy (RDIE) to measure both similarity and perceptual quality. Because traditional image information entropy cannot reflect structural information, we measure every region's information entropy with a sliding window. Since the human visual system is more sensitive to brightness differences at low brightness, we use γ quantization rather than linear quantization. To accelerate the method, we reorganize the calculation of information entropy as a neural network. Through experiments on our IQA dataset and PIPAL, this paper proves that RDIE can better quantify the perceptual quality of images, especially GAN-based images. Comment: 8 pages, 9 figures, 4 tables.
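    The two ingredients the abstract names — γ quantization (finer bins at low brightness) and per-window entropy via a sliding window — can be sketched like this. A toy stand-in for the RDIE score under assumed parameters (window size, levels, γ), not the paper's implementation.

    ```python
    import numpy as np

    def gamma_quantize(img, levels=16, gamma=0.5):
        # Non-linear quantization: gamma < 1 allocates finer bins at low
        # brightness, matching the HVS sensitivity noted in the abstract.
        img = np.clip(img / 255.0, 0.0, 1.0)
        return np.minimum((img ** gamma * levels).astype(int), levels - 1)

    def window_entropy(q, levels):
        # Shannon entropy of the quantized values in one window.
        counts = np.bincount(q.ravel(), minlength=levels).astype(float)
        p = counts / counts.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def regional_entropy_diff(a, b, win=8, levels=16, gamma=0.5):
        """Mean absolute difference of per-window entropies between two
        images -- a toy stand-in for the RDIE comparison (names assumed)."""
        qa, qb = gamma_quantize(a, levels, gamma), gamma_quantize(b, levels, gamma)
        h, w = qa.shape
        diffs = []
        for i in range(0, h - win + 1, win):
            for j in range(0, w - win + 1, win):
                ea = window_entropy(qa[i:i+win, j:j+win], levels)
                eb = window_entropy(qb[i:i+win, j:j+win], levels)
                diffs.append(abs(ea - eb))
        return float(np.mean(diffs))
    ```

    Comparing entropies rather than pixel values is what lets a restored texture that is similar but not pixel-identical to the reference still score well.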

    Image Restoration Quality Assessment Based on Regional Differential Information Entropy

    With the development of image restoration models, especially those based on adversarial and perceptual losses, the detailed texture portions of images are being recovered more naturally. However, these restored images are similar but not identical in texture detail to their reference images, and with traditional image quality assessment methods, results with better subjective perceived quality often score lower in objective terms: such assessment methods suffer from inconsistency between subjective and objective results. This paper proposes a regional differential information entropy (RDIE) method for image quality assessment to address this problem. The approach better assesses similar but not identical texture details and achieves good agreement with perceived quality. Neural networks are used to reshape the process of calculating information entropy, improving the speed and efficiency of the operation. Experiments conducted with this study's image quality assessment dataset and the PIPAL dataset show that the proposed RDIE method agrees closely with mean opinion scores compared with other image quality assessment metrics, proving that RDIE can better quantify the perceived quality of images.
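    The "neural networks reshape the entropy calculation" remark can be made concrete: per-window histogram counting is expressible purely as tensor operations (one-hot encoding followed by pooling), which is what lets the computation run as network layers on an accelerator. A minimal numpy sketch under assumed parameters:

    ```python
    import numpy as np

    def entropy_map_vectorized(q, win=8, levels=16):
        """Per-window entropy computed with tensor ops only: one-hot
        encode the quantized image, average-pool the channel histograms,
        then reduce -- the counting trick that lets entropy run as
        network layers (a sketch, not the paper's architecture)."""
        h, w = q.shape
        onehot = (q[..., None] == np.arange(levels)).astype(float)  # (h, w, L)
        # Non-overlapping win x win pooling of each histogram channel.
        pooled = onehot[:h - h % win, :w - w % win]
        pooled = pooled.reshape(h // win, win, w // win, win, levels).sum(axis=(1, 3))
        p = pooled / (win * win)              # per-window probabilities
        logp = np.zeros_like(p)
        np.log2(p, out=logp, where=p > 0)     # avoid log(0)
        return -(p * logp).sum(axis=-1)       # (h//win, w//win) entropy map
    ```

    Every step here (comparison, reshape, sum, elementwise log) has a direct counterpart in standard deep-learning frameworks, so no Python-level loop over windows is needed.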

    Buckle Pose Estimation Using a Generative Adversarial Network

    The buckle used before lens coating is still typically disassembled manually. The difference between a buckle and the background is small, while the differences among buckles are large, and mechanical disassembly can damage the lens; it is therefore important to estimate the buckle pose with high accuracy. This paper proposes a buckle pose estimation method based on a generative adversarial network. An edge extraction model based on a segmentation network serves as the generator. Spatial attention is added to the discriminator to help it better distinguish generated from real maps, so that, guided by the discriminator, the generator produces fine external contours and center edge lines. The external rectangle and the least-squares method are used to determine the center position and deflection angle of the buckle, respectively. The center-point and angle accuracies on the test datasets are 99.5% and 99.3%, respectively; the pixel error of the center-point distance and the absolute error of the angle to the horizontal are within 7.36 pixels and 1.98°. The method achieves the highest center-point and angle accuracies compared with HED, RCF, DexiNed, and PiDiNet. It can meet practical requirements and boost the production efficiency of lens coating.
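    The geometric post-processing stage — center from the external rectangle of the contour, deflection angle from a least-squares fit to the center edge line — can be sketched as below. This assumes the network has already produced contour and edge-line point sets; it is an illustration of the fitting step, not the paper's pipeline.

    ```python
    import numpy as np

    def center_and_angle(contour_xy, edge_line_xy):
        """Center from the axis-aligned external rectangle of the contour
        points; deflection angle from a least-squares line fit to the
        center edge-line points (a minimal stand-in for the
        post-processing described above)."""
        xmin, ymin = contour_xy.min(axis=0)
        xmax, ymax = contour_xy.max(axis=0)
        center = ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)
        # Fit y = m*x + c by least squares, then angle to the horizontal.
        x, y = edge_line_xy[:, 0], edge_line_xy[:, 1]
        A = np.stack([x, np.ones_like(x)], axis=1)
        (m, c), *_ = np.linalg.lstsq(A, y, rcond=None)
        angle_deg = np.degrees(np.arctan(m))
        return center, float(angle_deg)
    ```

    A slope-based fit like this is only well-conditioned for near-horizontal edge lines; a rotated minimum-area rectangle (e.g. OpenCV's `minAreaRect`) would be the usual choice when the deflection can be large.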

    Prior-Driven NeRF: Prior Guided Rendering

    Neural radiance field (NeRF)-based novel-view synthesis methods are gaining popularity, as NeRF can generate more detailed and realistic images than traditional methods. However, conventional NeRF reconstruction of a room scene requires at least several hundred input images and generates a huge number of spatial sampling points, placing a tremendous burden on training and prediction in both memory and computation time. To address these problems, we propose a prior-driven NeRF model that accepts only sparse views as input and prunes a significant number of non-contributing sampling points to improve training and prediction efficiency and achieve fast, high-quality rendering. First, depth priors guide the sampling: only a few sampling points within a controllable range around the depth prior are used as input, which reduces memory occupation and improves the efficiency of training and prediction. Second, depth priors are encoded as distance weights in the model, guiding it to fit the object surface quickly. Finally, a novel approach combining the traditional mesh rendering method (TMRM) with NeRF volume rendering further improves rendering efficiency. Experimental results demonstrate that our method has significant advantages in the case of sparse input views (11 per room) and few sampling points (8 per ray).
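    The first idea — restricting each ray's few samples to a controllable band around the depth prior instead of spanning the whole frustum — can be sketched as follows. Parameter names and the relative band width are assumptions for illustration.

    ```python
    import numpy as np

    def depth_guided_samples(ray_o, ray_d, depth_prior, n=8, band=0.1):
        """Place the n per-ray samples inside a narrow band around the
        depth prior (a sketch of the prior-driven sampling idea; the
        band parameterization is an assumption)."""
        t = np.linspace(depth_prior * (1 - band), depth_prior * (1 + band), n)
        return ray_o[None, :] + t[:, None] * ray_d[None, :]  # (n, 3) points
    ```

    With 8 samples confined to, say, ±10% of the prior depth, the network only evaluates points near the expected surface, which is where essentially all of the volume-rendering weight concentrates for opaque room geometry.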

    An Unknown Hidden Target Localization Method Based on Data Decoupling in Complex Scattering Media

    Because of complex scattering media, photons carrying target information are attenuated as they pass through, making target localization difficult. Resolving target-position information from scattered images is crucial for accurate target localization in environments such as dense fog in military applications. In this paper, a target localization network incorporating an attention mechanism is designed, based on the robust feature-resolving ability of neural networks and the characteristics of scattering formation. A training dataset of basic elements is constructed to achieve data decoupling and thereby estimate the positions of targets from different domains in complex scattering environments. Experimental validation shows that the method accurately localizes targets in speckle images across different data domains. These results provide ideas for future research on localizing typical targets in natural scattering environments.

    Data-Decoupled Scattering Imaging Method Based on Autocorrelation Enhancement

    Target recovery through scattering media is an important aspect of optical imaging. Although various algorithms combining deep-learning methods for target recovery through scattering media exist, they have limitations in robustness and generalization. To address these issues, this study proposes a data-decoupled scattering imaging method based on autocorrelation enhancement. The method constructs basic-element datasets, acquires the speckle images corresponding to these elements, and trains a deep-learning model on the autocorrelation images generated from the elements, using speckle autocorrelation as prior physical knowledge, to achieve scattering recovery imaging of targets across data domains. To remove noise terms and enhance the signal-to-noise ratio, a deep-learning model based on an encoder-decoder structure is used to recover a speckle autocorrelation image with a high signal-to-noise ratio. Finally, a clear reconstruction of the target is obtained by applying a traditional phase-retrieval algorithm. The results demonstrate that this process improves the peak signal-to-noise ratio of the data from 15 to 37.28 dB and the structural similarity from 0.38 to 0.99, allowing a clear target image to be reconstructed. Supplementary experiments on the robustness and generalization of the method prove that it performs well on frosted glass plates with different scattering characteristics.
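    The physical prior the method builds on is that, within the optical memory effect, the autocorrelation of a speckle pattern approximates the autocorrelation of the hidden object. Computing it efficiently uses the Wiener-Khinchin theorem: the autocorrelation is the inverse FFT of the power spectrum. A minimal sketch:

    ```python
    import numpy as np

    def speckle_autocorrelation(img):
        """Circular autocorrelation via the Wiener-Khinchin theorem:
        inverse FFT of the power spectrum. Under the memory effect this
        approximates the hidden object's autocorrelation, which is the
        physical prior used above."""
        img = img - img.mean()                       # remove the DC background term
        spectrum = np.fft.fft2(img)
        ac = np.fft.ifft2(np.abs(spectrum) ** 2).real
        ac = np.fft.fftshift(ac)                     # move zero lag to the center
        return ac / ac.max()                         # normalize the zero-shift peak to 1
    ```

    The autocorrelation image produced this way is what the encoder-decoder denoises before the phase-retrieval step reconstructs the target.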

    Comparative genomic analysis of Chinese human leptospirosis vaccine strain and circulating isolate

    Leptospira interrogans serogroup Canicola is one of the most important pathogens causing leptospirosis and serves as a vaccine strain of the current Chinese human leptospirosis vaccine. To characterize leptospiral pathogens, L. interrogans serogroup Canicola vaccine strain 611 and circulating isolate LJ178, from different hosts and different periods, were sequenced using a combined strategy of Illumina X10 and PacBio technologies, and a comprehensive comparative analysis with other published Leptospira strains was conducted. High genomic similarity was observed between vaccine strain 611 and circulating isolate LJ178; both had two circular chromosomes and two circular extrachromosomal replicons. Compared with the strain 611 genome, 132 single-nucleotide polymorphisms and 92 indels were found in strain LJ178. The larger lipopolysaccharide biosynthesis locus of serogroup Canicola was identified in both genomes. Phylogenetic analysis based on whole-genome sequences revealed that serogroup Canicola is not restricted to a specific host or geographic location, suggesting adaptive evolution associated with ecological diversity. In summary, our findings provide a better molecular understanding of the component strains of the human leptospirosis vaccine in China, and these data detail the genetic composition and evolutionary relatedness of Leptospira strains that pose a health risk to humans.
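    The SNP/indel tallies above come from comparing the two aligned genomes column by column. As a toy illustration of that distinction (not the actual variant-calling pipeline, which works on whole-genome alignments):

    ```python
    def count_variants(aln_a, aln_b):
        """Count SNP columns and indel columns between two aligned
        sequences, with '-' marking an alignment gap. A toy stand-in
        for the genome comparison described above."""
        assert len(aln_a) == len(aln_b), "sequences must be aligned to equal length"
        snps = indels = 0
        for a, b in zip(aln_a, aln_b):
            if a == b:
                continue                 # identical column
            if a == "-" or b == "-":
                indels += 1              # gap in one sequence
            else:
                snps += 1                # substitution
        return snps, indels
    ```

    Real pipelines additionally collapse adjacent gap columns into single indel events and filter by alignment quality, which this sketch omits.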